9 research outputs found

    A process pattern model for tackling and improving big data quality

    Data seldom create value by themselves; they need to be linked and combined from multiple sources, which often vary in data quality. Improving data quality is therefore a recurring challenge. In this paper, we use a case study of a large telecom company to develop a generic process pattern model for improving data quality. The process pattern model is defined as a proven series of activities aimed at improving data quality given a certain context, a particular objective, and a specific set of initial conditions. Four different patterns are derived to deal with variations in the quality of datasets. Instead of having to devise a new approach for each situation, data users can apply these generic patterns as a reference model for improving big data quality.
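    As a rough illustration of the idea (not the authors' model), a process pattern can be thought of as a small data structure bundling context, objective, initial conditions, and an ordered series of activities; the fields and example values in the sketch below are assumptions for illustration only.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ProcessPattern:
            """Illustrative sketch of a data quality process pattern (hypothetical fields)."""
            context: str                    # situation in which the pattern applies
            objective: str                  # data quality goal to reach
            initial_conditions: List[str]   # what must hold before the pattern is used
            activities: List[str] = field(default_factory=list)  # proven series of activities, in order

            def run(self, log=print):
                # Walk through the activities in sequence; real activities would be callables.
                for step, activity in enumerate(self.activities, start=1):
                    log(f"Step {step}: {activity}")

        # Hypothetical example of one such pattern
        dedup_pattern = ProcessPattern(
            context="customer records merged from two billing systems",
            objective="remove duplicate customer entries",
            initial_conditions=["shared customer identifier exists"],
            activities=["profile duplicates", "define matching rules", "merge records", "validate result"],
        )
        dedup_pattern.run()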

    Multi-criteria decision analysis with goal programming in engineering, management and social sciences: a state-of-the art review


    Towards Open Data Quality Improvements Based on Root Cause Analysis of Quality Issues

    Part 2: Open Data, Linked Data, and Semantic Web. Commercial reuse of open government data in value-added services has gained considerable interest, both in practice and as a research topic, over the last few years. However, using open data without a proper understanding of potential quality issues risks undermining the value of services that rely on public sector information. Rather than establishing a data quality assessment framework, this research reviews typical open data quality issues and connects them to the causes leading to these data problems. Open-data-specific problems are identified in a case study, and theoretical and empirical arguments are then used to trace them to root causes arising from the peculiarities of the public sector data management process. In this way, practitioners can choose appropriate cleansing methods more consciously, and participants shaping the data management process can aim at eliminating the root causes of data quality issues.
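    As a loose illustration of the issue-to-root-cause mapping the paper argues for, such a mapping could be kept as a simple lookup table; the issues, causes, and cleansing methods below are hypothetical examples, not taken from the study.

        # Hypothetical mapping from observed open data quality issues to assumed root causes
        # and candidate cleansing methods; entries are illustrative, not from the paper.
        ISSUE_CATALOG = {
            "missing values": {
                "root_cause": "optional fields in the source registration form",
                "cleansing": "impute or flag incomplete records before reuse",
            },
            "inconsistent date formats": {
                "root_cause": "datasets exported by different agencies without a shared standard",
                "cleansing": "normalize all dates to ISO 8601 during ingestion",
            },
            "outdated records": {
                "root_cause": "publication schedule decoupled from the internal update cycle",
                "cleansing": "attach a last-updated timestamp and filter stale rows",
            },
        }

        def diagnose(issue: str) -> str:
            """Return the assumed root cause and suggested cleansing step for a known issue."""
            entry = ISSUE_CATALOG.get(issue)
            if entry is None:
                return f"No catalog entry for '{issue}'; root cause analysis needed."
            return f"{issue}: caused by {entry['root_cause']}; suggested fix: {entry['cleansing']}."

        print(diagnose("missing values"))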

    An Annotated Bibliography for Post-solution Analysis in Mixed Integer Programming and Combinatorial Optimization

    This annotated bibliography focuses on what has been published since the 1977 Geoffrion-Nauss survey. It is provided in BibTeX format so that it can be searched on the World Wide Web. In addition to postoptimal sensitivity analysis, the survey covers debugging a run, such as when the integer program is unbounded, anomalous, or infeasible.
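    A minimal sketch of the kind of "debugging a run" the bibliography covers, assuming the open-source PuLP modeling library (not a tool named in the survey): it builds a deliberately infeasible integer program and inspects the solver status.

        # Minimal sketch, assuming PuLP (pip install pulp) and its bundled CBC solver.
        from pulp import LpProblem, LpMinimize, LpVariable, LpStatus

        prob = LpProblem("infeasible_example", LpMinimize)
        x = LpVariable("x", lowBound=0, cat="Integer")

        prob += x        # objective: minimize x
        prob += x >= 5   # constraint 1
        prob += x <= 2   # constraint 2 contradicts constraint 1

        prob.solve()
        print(LpStatus[prob.status])   # expected to report "Infeasible"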